The Gap Is Always There

 

What Every Engineer Should Know About FEA

A Guide for Engineers and Designers Who Commission Simulation Work

 

PART 4

Drop Simulation

The Gap Between Model and Reality

 

Joseph P. McFadden, Sr.

The Holistic Analyst

McFaddenCAE.com

2026


 


The Gap Is Always There

 

Three parts into this series, a foundation has been established. The simulation is a model — a carefully built approximation of your design and the event it experiences. The model is constructed from inputs that carry uncertainty, and the results it produces should be read with engaged, critical attention rather than passive acceptance.

This part is where that foundation becomes most important, because now we talk directly about something that is too often left implicit in a simulation review: the gap between what the model predicts and what the physical product actually does is always present. The question is never whether it exists. The question is how large it is, where it is concentrated, and what it means for how much confidence you should place in the results.

Knowledge of the gap is what produces calibrated confidence — not false certainty, and not unfounded skepticism. That calibrated confidence is the mark of an engineer who uses simulation well.

This part is not a case against simulation. It is a case for using it with the right expectations. An engineer who understands where and why models diverge from physical reality is an engineer who draws better conclusions from simulation results than one who treats the output as a precise measurement of the physical world.

 

Gap Source One: Geometry and Manufacturing Variation

 

The most fundamental source of the gap between model and reality is that the model is a simplification. Every decision made during model construction — which geometric features to preserve, which to remove, what mesh density to apply, how to represent connections between parts — introduces some degree of approximation. Each individual approximation is made with engineering judgment and is typically defensible. Taken together, they accumulate into a model that is a considered representation of the real product, not a replica of it.

Within the geometry category, however, the most consequential gap is frequently not the features that were simplified during model preparation. It is the manufacturing variation present in every real part that was never in the model at all.

Nominal Versus Actual

The simulation model is built from nominal CAD geometry — nominal wall thicknesses, nominal fillet radii, nominal boss heights and feature dimensions. Real injection-molded parts are not nominal. Wall thickness varies across a part due to mold tool wear, injection pressure gradients, cooling rate differences across the tool, and process parameter variation from shot to shot. Under typical production conditions, a nominal wall thickness of three millimeters may vary by a tenth to two tenths of a millimeter across a production run — modest in absolute terms, but meaningful in its effect on structural performance at locations where the design is already working close to its limit.

Wall thickness has a direct and significant effect on structural stiffness and on peak stress under bending loads. Even a wall that is modestly thinner than nominal is more flexible and will carry higher bending stress under the same load, because the same force is acting on a reduced cross-section. The simulation models the nominal geometry. The parts that get dropped — in the test lab and in the field — have actual dimensions that differ from that nominal in ways the simulation does not account for. Where margins are comfortable, that difference is absorbed. Where margins are thin, it matters.
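
A back-of-envelope check makes that sensitivity concrete. The short Python sketch below assumes simple plate bending, where stress for a given moment scales with one over thickness squared and bending stiffness scales with thickness cubed; the numbers are illustrative, not data for any specific part.

# Back-of-envelope sensitivity of a thin wall in bending to thickness variation.
# Assumes a simple rectangular section per unit width; real housings have ribs,
# curvature, and load paths this ignores. Numbers are illustrative only.

t_nominal = 3.0   # mm, nominal wall thickness in the CAD model
t_actual = 2.8    # mm, a wall running 0.2 mm thin in production

# Bending stress for a given moment scales with 1/t^2 (section modulus ~ t^2/6).
stress_ratio = (t_nominal / t_actual) ** 2

# Bending stiffness scales with t^3 (second moment of area ~ t^3/12).
stiffness_ratio = (t_actual / t_nominal) ** 3

print(f"Bending stress increase:  {100 * (stress_ratio - 1):+.0f}%")    # about +15%
print(f"Bending stiffness change: {100 * (stiffness_ratio - 1):+.0f}%") # about -19%

A wall running two tenths thin on a three-millimeter nominal carries roughly fifteen percent more bending stress under the same moment, which is exactly the kind of shift that erases a factor of safety sitting near one.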

The practical implication:

When the simulation shows a thin margin at a thin wall section — a factor of safety close to one — that result deserves extra scrutiny. The physical parts may be operating closer to their limit than the nominal model suggests, or farther from it, depending on how the manufacturing variation runs. A well-calibrated analyst will flag thin margins at geometry-sensitive locations. A well-engaged engineer will ask about them and understand what the margin means in the context of real production variability.

 

Gap Source Two: Material Properties

 

Materials are the second major source of the gap between model and reality, and within this category there are two distinct problems that affect drop simulations in particular ways.

Database Properties Versus Production Material

The material properties used in a simulation come from some combination of published datasheets, material supplier specifications, and coupon-level test programs. All of those sources characterize the material under controlled laboratory conditions — specific specimen geometries, specific processing conditions, specific environmental conditioning protocols. They represent the material as the material supplier intends it to be.

Production parts are made from material drawn from specific supplier lots, processed on specific press equipment, at specific melt temperatures, with specific injection speeds and cooling cycles. The as-molded material in a production part has properties that reflect its actual processing history — residual stress from cooling, molecular orientation from flow, fiber orientation in filled grades, and potential degradation from moisture exposure or regrind content. The degree to which those as-processed properties differ from the datasheet characterization varies by material and by process control, but the difference is always present to some degree.

Fiber-Reinforced Materials: The Direction Problem

Glass-filled and carbon-filled polymers present an additional complication that many simulation models handle imperfectly. The stiffness and strength of a glass-filled nylon, for example, are strongly dependent on the local orientation of the reinforcing fibers — and that orientation is set by the flow field of the melt during injection, which varies across the part geometry. Material near a gate flows differently from material that has traveled across a long, thin rib to fill a remote section. The fiber orientation in the finished part is a complex three-dimensional field that depends on part geometry, gate location, and processing conditions.

A simulation that assigns isotropic material properties to a glass-filled component — the same stiffness and strength in all directions — is making a simplification that may be acceptable or significant depending on where the critical load paths are and what the local fiber orientation happens to be. High-fidelity simulation of fiber-reinforced parts requires coupling the injection molding simulation results to the structural model, which is a more intensive effort that is not always warranted but is sometimes essential for accurate predictions in heavily loaded components.
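
To put a rough scale on what the isotropic simplification can hide, the sketch below compares an assumed datasheet-style modulus, measured on specimens where the fibers are largely aligned with the load, against an assumed cross-flow modulus. Both values are placeholders for illustration rather than data for any particular grade.

# Illustrative only: the spread an isotropic-modulus assumption can hide for a
# glass-filled polymer. Both values below are assumptions, not data for any
# specific grade; real values depend on fiber content, conditioning, and the
# local flow-induced orientation.

E_along_fiber = 9.0   # GPa, assumed datasheet-style value, fibers aligned with load
E_cross_flow = 5.5    # GPa, assumed stiffness transverse to the dominant fiber direction

# Deflection of a stiffness-governed feature scales roughly with 1/E, so the same
# load produces proportionally more deflection where it runs across the fibers.
print(f"Deflection ratio, cross-flow vs along-fiber: {E_along_fiber / E_cross_flow:.2f}x")

Whether a spread of that order matters depends on which direction the critical load path runs relative to the local fiber orientation, which is exactly why coupling the molding simulation to the structural model is sometimes worth the added effort.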

Rate Dependence: The Behavior Under Speed

The loading rate at which a drop event occurs is orders of magnitude faster than the loading rates used in standard quasi-static material testing. Many polymers and some metals exhibit rate-dependent behavior — their stiffness, yield strength, and failure characteristics change as a function of how quickly they are deformed. Some materials are tougher under impact loading than their quasi-static properties suggest. Others are more brittle. The correct material characterization for a drop simulation uses properties measured at loading rates representative of the drop event — and those rate-dependent properties are frequently not available in standard material databases.
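
One common way explicit solvers represent this rate sensitivity is a Cowper-Symonds style scaling of the quasi-static yield stress. The sketch below uses that form with placeholder parameters; the D and p values for a real material come from high-rate testing and vary widely between polymers.

# Cowper-Symonds scaling, one common form for strain-rate-dependent yield:
#   sigma_dynamic = sigma_static * (1 + (strain_rate / D) ** (1 / p))
# The material parameters below are placeholders for illustration, not data
# for any particular polymer; real values come from high-rate material testing.

sigma_static = 50.0   # MPa, quasi-static yield stress (illustrative)
D, p = 1000.0, 4.0    # Cowper-Symonds parameters (assumed for illustration)

for strain_rate in (0.001, 1.0, 100.0, 1000.0):   # 1/s, test-frame rates through drop-impact rates
    sigma_dynamic = sigma_static * (1.0 + (strain_rate / D) ** (1.0 / p))
    print(f"strain rate {strain_rate:8.3f} 1/s -> effective yield ~ {sigma_dynamic:5.1f} MPa")

The point is the shape of the effect, not the specific numbers: the effective strength at drop-event strain rates can differ meaningfully from the quasi-static datasheet value.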

When a simulation uses quasi-static properties for a dynamic event, the stress predictions may carry a systematic bias across all high-stress locations. This is one of the patterns most worth looking for when comparing simulation results to physical test outcomes. A model that consistently over-predicts or under-predicts failure across multiple test conditions points toward the material characterization as the most likely source — and refining the material model is usually the highest-leverage improvement available to the analyst.

 

The Root of the Gap: Design, Manufacturing, and the Material's True Nature

 

Stepping back from the individual gap categories, there is a systemic root cause of significant mismatch between simulation predictions and real-world performance that deserves to be named plainly. Across decades of CAE work, this pattern appears more consistently than any other.

The deepest gap is almost never in the simulation method itself. It is in the disconnect between design and manufacturing — and in a shared lack of understanding of what the material truly is by the time it becomes a finished part.

You cannot process out a bad design. And poor processing will process out a good design.

The design team creates a geometry, specifies a material, and defines tolerances. The manufacturing team turns that design into a physical part using real equipment, real tooling, and real process conditions that vary from shift to shift and lot to lot. The material that enters the press has properties. The part that comes out has different properties — shaped by temperature history, flow field, cooling rate, and pressure distribution across the tool. The material in the datasheet is the starting point of the material in the part, not the final description of it.

When a simulation fails to predict what the physical product does, the most productive question is not: what did the analyst get wrong? The most productive question is: does the model represent what was actually built — not what was designed? A design that looks robust on paper — good margins, generous wall sections, careful attention to stress concentrations — can be undermined entirely by a manufacturing process that introduces voids, weld lines in critical locations, residual stresses from cooling, or fiber orientation distributions that bear no resemblance to the isotropic assumption in the model. The simulation predicted the design. The test evaluated what was made. Those are not always the same thing.

Why the Holistic Approach Exists

This systemic disconnect is why bringing designers, manufacturing engineers, material suppliers, and CAE analysts together — giving them a shared vocabulary and a genuine appreciation for each other's work and goals — produces outcomes that no single discipline can achieve in isolation. Not because any one group is failing at its job, but because each carries knowledge the others need.

The designer may not know what the manufacturing process does to the material at the part level. The manufacturing engineer may not fully understand what structural performance the design is depending on from the material. The analyst may not know how the part that actually gets tested differs from the nominal CAD model. When those groups communicate — when they develop real appreciation for what each discipline contributes and what each requires — the gap between simulation and physical product narrows. Not through better software. Through better conversation.

This series is part of that conversation. Every part of it has been aimed at giving the designer and the engineer a deeper appreciation of what the analyst's work involves — and what it needs from them to be truly useful. That is the holistic approach in practice. Not any single discipline doing better in isolation. All of them doing better together.

 

Gap Source Three: Contact Behavior

 

Contact — the physical interaction between surfaces during the drop event — is among the most technically demanding aspects of a drop simulation to represent accurately, and one of the most significant sources of divergence between model predictions and physical test results.

The floor surface in a typical simulation is modeled as perfectly rigid and perfectly flat. In physical reality, the floor has surface texture. The housing has surface irregularities. At the moment of contact, the interaction begins at discrete asperity contacts before developing into distributed contact across the nominal surface. The effective friction coefficient between a plastic housing and a concrete floor at impact velocity differs from the coefficient measured in a quasi-static sliding test. The way the contact force develops in time — how quickly full contact is established, how the load distributes across the contact footprint — depends on all of these factors.

Within the model, the contact behavior is governed by parameters that the analyst must specify: the friction coefficient at each interface, the contact stiffness, the algorithmic approach to detecting and enforcing contact constraints. The results are genuinely sensitive to these choices. A change in the assumed friction coefficient at the housing-floor interface can shift the stress distribution at the impact face meaningfully. An experienced analyst will have made considered choices for these parameters and should be able to describe those choices and their rationale when asked.

A useful question for any drop simulation review:

What friction coefficient was assumed at the primary contact interface, and was any sensitivity study performed around that value? The answer tells you both how carefully the contact setup was considered and how sensitive the results are likely to be to the contact assumptions.
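
The sketch below shows, in Python, the kind of minimal sensitivity plan that question is probing for. The baseline coefficient and the bracket width are assumptions for illustration; the idea is simply to bracket the assumed value with additional runs rather than trust a single number.

# Sketch of a minimal friction sensitivity plan for the housing-floor interface.
# The baseline coefficient and bracket are assumed values for illustration; the
# point is to bracket the assumption with extra runs rather than trust one number.

baseline_mu = 0.3     # assumed friction coefficient at the primary contact interface
bracket = 0.5         # re-run the model at -50% and +50% of the assumed value

cases = [round(baseline_mu * f, 3) for f in (1 - bracket, 1.0, 1 + bracket)]
print("Friction coefficients to analyze:", cases)   # [0.15, 0.3, 0.45]

# Compare the peak stress at the impact face across the runs. If it moves by a
# few percent, the result is robust to the friction assumption; if it moves the
# margin from comfortable to marginal, the contact setup needs more attention.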

 

Gap Source Four: Variability in the Drop Event Itself

 

Every simulation run represents one precisely defined scenario: a specific drop height, a specific impact orientation, a specific surface, a specific product configuration. Real drop events — in the test lab and in the field — are not repeatable in the same sense. The product rotates slightly as it falls. It lands at an angle that differs from the nominal orientation by a few degrees. The person who dropped it may have been walking, or their hand position may have imparted a slight spin. The floor surface is not perfectly uniform across its area.

This variability means that the simulation represents a point estimate within a distribution of possible drop scenarios. Some variations around the nominal case produce less severe loading. Others shift the load path in ways that expose different structural vulnerabilities — moving the peak stress from a robust location to a more vulnerable one with a small change in impact angle.

This is a strong argument for analyzing multiple drop orientations, not just the single nominal case specified in the test standard. A well-structured drop simulation program evaluates the orientations that are most likely to produce structural problems, informed by knowledge of the design's geometry and any prior test experience. An analyst who presents results for only one orientation has answered a specific question accurately; they may not have answered the most important question for your design.
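
One way to make that concrete is to build a small matrix of perturbed orientations around the nominal case and analyze those alongside the standard drop. The sketch below is illustrative; the five-degree perturbation is an assumed value, and the right spread depends on the product and how it is actually handled.

# Sketch of a small orientation matrix around the nominal drop case, rather than
# analyzing only the single orientation in the test standard. The +/- 5 degree
# perturbation is an assumed value for illustration.

import itertools

nominal = {"pitch": 0.0, "roll": 0.0}   # degrees, nominal face-down impact
perturbation = 5.0                      # degrees of tilt about each axis

cases = [
    {"pitch": nominal["pitch"] + dp, "roll": nominal["roll"] + dr}
    for dp, dr in itertools.product((-perturbation, 0.0, perturbation), repeat=2)
]

for i, case in enumerate(cases, start=1):
    print(f"case {i}: pitch {case['pitch']:+.1f} deg, roll {case['roll']:+.1f} deg")

# Nine runs. The cases that shift peak stress to a different location are the
# ones that reveal vulnerabilities the single nominal run would never show.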

 

Gap Source Five: Unmodeled Features

 

The most consequential gap in a drop simulation is sometimes one that stays invisible until the physical test produces a failure the model never predicted. This is the gap created by features that exist in the physical product but are absent from the simulation.

The list of commonly unmodeled features in consumer and commercial product development is consistent across product families. Cable harnesses routed between components and clipped loosely to internal features change the mass distribution and the coupling between components during impact. Adhesive labels and protective films on external surfaces alter the effective stiffness of thin housing walls. Potting compounds and encapsulants inside modules add stiffness and damping that are frequently omitted because their geometry is complex and their material properties are uncertain. Interference fits between assembled components add stiffness to the assembly that a model with simple contact interfaces will not capture. Thread engagement between fastened parts affects load transfer in ways that nominal fixed connections do not represent.

The analyst cannot model what they do not know exists or cannot characterize. This is the category of the gap most directly addressed by the engineering partnership. Your knowledge of the real assembly — the full bill of materials, the assembly sequence, the features that differentiate the production product from the nominal CAD model — is the resource that closes this particular gap. The conversation between analyst and engineer before the model is built, not after the results come back, is where this gap is narrowed most effectively.

 

Simulation and Testing Are Partners, Not Competitors

 

The gap between model and physical reality does not make simulation less valuable. It defines the relationship between simulation and physical testing — and that relationship, understood correctly, is one of the most powerful tools available in product development.

Physical drop testing has its own limitations. A drop test tells you whether the specific test article, dropped in the specific orientation specified by the test procedure, survived or failed. It does not tell you the margin — how far the design was from the failure threshold in either direction. It does not identify which location in the structure initiated failure and through what mechanism. It does not give you the stress distribution across the parts that did not fail, which you would need to understand the full structural behavior and identify other potential vulnerabilities. And running multiple conditions — different heights, different orientations, different configurations — requires multiple test articles, significant test time, and corresponding cost.

Simulation provides what testing cannot economically provide: the full spatial distribution of stress, displacement, and acceleration across every element of the model, for every condition analyzed. It tells you the margin at every location. It shows you the load path. It lets you evaluate ten design variations in the time it would take to physically build and test two.

Simulation and physical testing are partners. Simulation provides the insight that testing cannot economically replicate. Testing provides the validation that simulation cannot provide from first principles alone.

The most reliable engineering decisions come from using both methods in conjunction — using simulation to understand structural behavior and screen design options early, and using targeted physical testing to validate the simulation's critical predictions before the design is committed. When a simulation has been correlated to physical test data — even partially, at the level of failure location and approximate failure load — its predictions carry much greater weight than one that has never been compared to physical evidence.

 

The Correlation Discussion: The Most Important Conversation in the Room

 

If physical test data exists for this product or for a predecessor product with similar structural architecture, the most valuable conversation in a simulation review is the direct comparison between what the model predicted and what the test revealed. Where did the model agree with the test outcome? Where did it not? When discrepancies exist, which modeling assumption is the most likely source?

A simulation that has been correlated to test data — even at a high level, even imperfectly — is a qualitatively more reliable tool than one that has not. Correlation does not require that the model reproduce the test result exactly. It requires that the analyst and the engineer have looked at both together, that they understand the pattern of agreement and disagreement, and that the results are being interpreted with that pattern explicitly in view.

If correlation data has not been brought to the review, ask for it. Ask whether any physical drop testing has been done on this product or a related one. Ask where the model has been compared to physical evidence and what was learned. Ask whether the predicted failure mode — if the model predicts failure — is consistent with any failure modes observed in prior testing. These questions are not challenges to the analyst's competence. They are the substance of the engineering conversation that a simulation review exists to produce.

 

Calibrated Confidence: What It Looks Like in Practice

 

An engineer who has internalized the ideas in this part thinks about simulation results differently from one who has not. They neither dismiss the gap nor are paralyzed by it. They use it as information that shapes how they weight the evidence in front of them.

They know which results are robust to the dominant modeling assumptions and which are sensitive to them. A predicted failure at a location with thick, well-characterized material, under a loading mode the model captures well, warrants more confidence than a marginal result at a thin wall section in a glass-filled component where the fiber orientation is unknown and no rate-dependent material data was available.

They know whether the simulation has been validated against physical evidence in this product family, and they weight the predictions accordingly. A model that has correctly predicted failure location and approximate failure load in a previous generation of a product earns more trust for the current generation than a model being used for the first time on a new architecture without prior correlation.

They make design decisions proportionate to that understanding. Comfortable margin at validated locations supports confidence in proceeding. Thin margin at uncertain locations is a signal to either improve the model, run a targeted test, or add design margin before committing — not necessarily all three, but at least one of them. The calibrated engineer knows which response is appropriate for the degree of uncertainty actually present.

 

Coming Up in Part 5

 

Part Five closes the loop. The model has been built, the results have been read, and the gap has been understood. Part Five addresses the question the entire series has been building toward: how do you take what the simulation is telling you and use it to make a better design? We will walk through the practical process of moving from simulation findings to design decisions — the conversations to have, the changes to evaluate, and the discipline of knowing when the simulation has told you what you need to know.

 

— End of Part 4 —

 

© 2026 Joseph P. McFadden, Sr.  |  The Holistic Analyst  |  McFaddenCAE.com

Freely shared for the engineering community. Not for resale.
